$\mathcal{C}^k$-Continuous Spline Approximation with TensorFlow Gradient Descent Optimizers

Authors

Abstract

In this work we present an “out-of-the-box” application of Machine Learning (ML) optimizers to an industrial optimization problem. We introduce a piecewise polynomial model (spline) for fitting $\mathcal{C}^k$-continuous functions, which can be deployed in a cam approximation setting. We then use the gradient descent framework provided by the machine learning library TensorFlow to optimize the model parameters with respect to approximation quality and $\mathcal{C}^k$-continuity, and we evaluate the available optimizers. Our experiments show that a solution to the problem is feasible using gradient tapes, with AMSGrad and SGD yielding the best results among the evaluated optimizers. Furthermore, a novel regularization approach improves convergence. Although the discontinuities remaining after optimization are small, to eliminate these errors completely we present an algorithm that affects only the relevant derivatives of the local spline segment.
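
As a rough illustration of the setup the abstract describes, the sketch below fits one cubic polynomial per segment to sampled data and penalizes derivative jumps at the interior knots, minimizing the combined loss with TensorFlow's GradientTape and the AMSGrad variant of Adam (which the abstract names among the best performers). The toy target, knot layout, penalty weight, and all identifiers are our assumptions, not the authors' implementation.

```python
import numpy as np
import tensorflow as tf

# Toy target sampled on [0, 1]; 4 cubic segments with uniform knots.
n_seg, degree, k = 4, 3, 2
x = np.linspace(0.0, 1.0, 200, dtype=np.float32)
y = np.sin(2.0 * np.pi * x).astype(np.float32)

seg = np.minimum((x * n_seg).astype(np.int32), n_seg - 1)  # segment index per sample
t = (x * n_seg - seg).astype(np.float32)                   # local coordinate in [0, 1)
powers = np.stack([t ** j for j in range(degree + 1)], axis=1)

# coeffs[s, j] is the coefficient of t**j on segment s.
coeffs = tf.Variable(tf.zeros([n_seg, degree + 1]))

# dfac[d, j] = j! / (j - d)!  (zero when j < d): d-th derivative factor of t**j.
# With uniform knots, matching local-t derivatives matches global ones up to a
# constant factor per order, so the penalty below enforces C^k continuity.
dfac = np.array([[np.prod(range(j - d + 1, j + 1)) for j in range(degree + 1)]
                 for d in range(k + 1)], dtype=np.float32)

def loss_fn():
    pred = tf.reduce_sum(tf.gather(coeffs, seg) * powers, axis=1)
    fit = tf.reduce_mean((pred - y) ** 2)
    penalty = 0.0
    for d in range(k + 1):  # match derivatives 0..k at every interior knot
        left = tf.reduce_sum(coeffs[:-1] * dfac[d], axis=1)  # derivative at local t = 1
        right = coeffs[1:, d] * dfac[d, d]                   # derivative at local t = 0
        penalty += tf.reduce_sum((left - right) ** 2)
    return fit + 1.0 * penalty          # continuity weight chosen ad hoc

opt = tf.keras.optimizers.Adam(learning_rate=0.01, amsgrad=True)  # AMSGrad variant
for _ in range(5000):
    with tf.GradientTape() as tape:
        loss = loss_fn()
    opt.apply_gradients([(tape.gradient(loss, coeffs), coeffs)])
print(float(loss_fn()))
```

Because the continuity penalty is only a soft constraint, small derivative jumps can remain after optimization, which is presumably why the paper adds a local post-processing step that adjusts the affected derivatives of a single segment.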

Similar Resources

Reference-shaping adaptive control by using gradient descent optimizers

This study presents a model reference adaptive control scheme based on a reference-shaping approach. The proposed adaptive control structure includes two optimizer processes that perform gradient descent optimization. The first process is the control optimizer, which generates an appropriate control signal so that the controlled system output tracks a reference model output. The second process is ...
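
For illustration, the sketch below implements the classic gradient descent adaptation of a single feedforward gain (the MIT rule). It shows only the generic gradient-descent-in-the-loop idea, not the paper's two-optimizer reference-shaping scheme; all plant values and gains are our assumptions.

```python
# Sketch of gradient descent gain adaptation via the classic MIT rule.
a, b, bm = 1.0, 0.5, 1.0       # plant pole, unknown plant gain, model gain
gamma, dt = 2.0, 1e-3          # adaptation gain, Euler integration step
y, ym, theta = 0.0, 0.0, 0.0   # plant state, model state, adapted gain
for step in range(int(40.0 / dt)):
    u_c = 1.0 if (step * dt) % 10.0 < 5.0 else -1.0  # square-wave command
    e = y - ym                                       # model-following error
    theta -= gamma * e * ym * dt   # MIT rule: gradient step on J = e**2 / 2
    y += (-a * y + b * theta * u_c) * dt   # plant: dy/dt = -a*y + b*theta*u_c
    ym += (-a * ym + bm * u_c) * dt        # model: dym/dt = -a*ym + bm*u_c
print(f"theta = {theta:.2f}  (ideal feedforward gain bm/b = {bm / b:.1f})")
```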

Continuous Generalized Gradient Descent

This article derives characterizations and computational algorithms for continuous general gradient descent trajectories in high-dimensional parameter spaces for statistical model selection, prediction, and classification. Examples include proportional gradient shrinkage as an extension of LASSO and LARS, threshold gradient descent with right-continuous variable selectors, threshold ridge regre...
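
As a concrete instance of the threshold gradient descent idea mentioned above, the sketch below updates only the coordinates whose gradient magnitude is within a factor tau of the largest one; the data, tau, and step size are our assumptions, not the article's experiments.

```python
import numpy as np

# Sketch: threshold gradient descent for sparse linear regression. tau -> 1
# gives stagewise/LARS-like paths, tau = 0 recovers plain gradient descent.
rng = np.random.default_rng(0)
n, p = 100, 20
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.0, 0.5]            # sparse ground truth
y = X @ beta_true + 0.1 * rng.standard_normal(n)

beta, lr, tau = np.zeros(p), 0.05, 0.9
for _ in range(2000):
    g = X.T @ (X @ beta - y) / n            # gradient of the squared-error loss
    mask = np.abs(g) >= tau * np.abs(g).max()   # near-maximal coordinates only
    beta -= lr * g * mask
print(np.round(beta, 2))                     # mostly zero outside the support
```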

A Gradient Descent Approximation for Graph Cuts

Graph cuts have become very popular in many areas of computer vision including segmentation, energy minimization, and 3D reconstruction. Their ability to find optimal results efficiently and the convenience of usage are some of the factors behind this popularity. However, there are a few issues with graph cuts, such as the inherent sequential nature of popular algorithms and the memory bloat in large s...

Stochastic Gradient Descent in Continuous Time

We consider stochastic gradient descent for continuous-time models. Traditional approaches for the statistical estimation of continuous-time models, such as batch optimization, can be impractical for large datasets where observations occur over a long period of time. Stochastic gradient descent provides a computationally efficient method for such statistical learning problems. The stochastic gr...
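
A minimal sketch of the idea, under our own assumptions (an Ornstein-Uhlenbeck model and Robbins-Monro step sizes): update the parameter estimate with every observed increment of the path instead of batch-optimizing over the whole trajectory.

```python
import numpy as np

# Sketch: stochastic gradient descent along a continuous-time path, estimating
# the mean-reversion rate theta of an Ornstein-Uhlenbeck process
#   dX_t = -theta * X_t dt + sigma dW_t
# from one simulated trajectory, with one update per observed increment.
rng = np.random.default_rng(1)
theta_true, sigma, dt, T = 1.5, 0.5, 1e-3, 200.0
x, theta_hat = 1.0, 0.0
for step in range(int(T / dt)):
    t = step * dt
    dx = -theta_true * x * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    lr = 5.0 / (1.0 + 0.5 * t)   # decaying (Robbins-Monro style) step sizes
    # Move theta_hat along the gradient of the drift model f(x) = -theta_hat*x,
    # weighted by the innovation dx - f(x) dt.
    theta_hat += lr * (-x) * (dx + theta_hat * x * dt)
    x += dx
print(f"theta_hat = {theta_hat:.2f}  (true value {theta_true})")  # noisy estimate
```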

Gradient descent algorithms for quantile regression with smooth approximation

Gradient based optimization methods often converge quickly to a local optimum. However, the check loss function used by the quantile regression model is not everywhere differentiable, which prevents gradient based optimization methods from being applicable. As such, this paper introduces a smooth function to approximate the check loss function so that the gradient based optimization methods cou...
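
To illustrate the generic technique, the sketch below smooths the check (pinball) loss with a Huber-style linear interpolation of width eps so that plain gradient descent applies; the smoothing function and all parameters are our assumptions and may differ from the paper's.

```python
import numpy as np

# Sketch: quantile regression by gradient descent on a smoothed check loss.
# Outside [-eps, eps] the derivative equals the usual subgradient
# (tau for r > 0, tau - 1 for r < 0); inside, it is linearly interpolated.
def smooth_check_grad(r, tau, eps=0.1):
    return np.where(r > eps, tau,
                    np.where(r < -eps, tau - 1.0, tau - 0.5 + r / (2.0 * eps)))

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(0.0, 1.0, n)
y = 1.0 + 2.0 * x + rng.standard_normal(n)   # noisy line; fit its tau-quantile
X = np.stack([np.ones(n), x], axis=1)        # design matrix: intercept + slope
w, tau, lr = np.zeros(2), 0.9, 0.1
for _ in range(5000):
    r = y - X @ w                                    # residuals
    grad = -(X.T @ smooth_check_grad(r, tau)) / n    # note dr/dw = -X
    w -= lr * grad
print(f"intercept {w[0]:.2f}, slope {w[1]:.2f}")     # 0.9-quantile line
```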


Journal

Title: Lecture Notes in Computer Science

Year: 2022

ISSN: 1611-3349, 0302-9743

DOI: https://doi.org/10.1007/978-3-031-25312-6_68